When the wheels come off: Lessons from Sonoma on racing, resilience, and engine oil

Popular Science

I went to Sonoma for a NASCAR race and found out heat is the bad guy, fluids are the secret weapon, and Valvoline's engineers are basically mad scientists with pit passes. We may earn revenue from the products available on this page and participate in affiliate programs. A tire is making decent progress coming out of a turn at Sonoma Raceway, except that it's no longer attached to Cody Ware's No. 51 Ford Mustang. Crowds gasp, cars swerve, and the wheel menacingly rolls off, then on, and then off the track again before it finally falls flat. I've never related to a tire more.


HydraRAG: Structured Cross-Source Enhanced Large Language Model Reasoning

Tan, Xingyu, Wang, Xiaoyang, Liu, Qing, Xu, Xiwei, Yuan, Xin, Zhu, Liming, Zhang, Wenjie

arXiv.org Artificial Intelligence

Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating external knowledge. Current hybrid RAG systems retrieve evidence from both knowledge graphs (KGs) and text documents to support LLM reasoning. However, they face challenges in handling multi-hop reasoning, multi-entity questions, multi-source verification, and effective graph utilization. To address these limitations, we present HydraRAG, a training-free framework that unifies graph topology, document semantics, and source reliability to support deep, faithful reasoning in LLMs. HydraRAG handles multi-hop and multi-entity problems through agent-driven exploration that combines structured and unstructured retrieval, increasing both the diversity and precision of evidence. To tackle multi-source verification, HydraRAG uses tri-factor cross-source verification (source trustworthiness assessment, cross-source corroboration, and entity-path alignment) to balance topic relevance with cross-modal agreement. By leveraging graph structure, HydraRAG fuses heterogeneous sources, guides efficient exploration, and prunes noise early. Comprehensive experiments on seven benchmark datasets show that HydraRAG achieves overall state-of-the-art results on all benchmarks with GPT-3.5-Turbo, outperforming the strong hybrid baseline ToG-2 by an average of 20.3% and up to 30.1%. Furthermore, HydraRAG enables smaller models (e.g., Llama-3.1-8B) to achieve reasoning performance comparable to that of GPT-4-Turbo. The source code is available at https://stevetantan.github.io/HydraRAG/.
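To make the tri-factor verification idea concrete, here is a minimal Python sketch of how such a score might combine source trustworthiness, cross-source corroboration, and entity-path alignment to rank and prune evidence. All names, weights, and the linear combination are illustrative assumptions, not the paper's actual implementation; see the linked source code for HydraRAG's real method.

```python
# Hypothetical sketch of tri-factor cross-source verification.
# The three factor names come from the abstract; the scoring scheme,
# weights, and threshold below are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Evidence:
    source_trust: float    # trustworthiness of the originating source (0..1)
    corroboration: float   # agreement with evidence from other sources (0..1)
    path_alignment: float  # alignment of the entity path with the query (0..1)


def verify(ev: Evidence, weights=(0.3, 0.4, 0.3)) -> float:
    """Combine the three factors into one verification score (assumed linear mix)."""
    w_trust, w_corr, w_path = weights
    return (w_trust * ev.source_trust
            + w_corr * ev.corroboration
            + w_path * ev.path_alignment)


def rank_evidence(candidates: list[Evidence], threshold: float = 0.5) -> list[Evidence]:
    """Prune low-scoring (noisy) candidates early, then rank the rest."""
    kept = [e for e in candidates if verify(e) >= threshold]
    return sorted(kept, key=verify, reverse=True)
```

In this sketch, pruning by a threshold before ranking mirrors the abstract's point about discarding noise early, while the weighted mix balances topic relevance against cross-modal agreement.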